Employees Utilize ChatGPT To Streamline Work Processes, But Companies Fear Data Breaches

Many workers across the United States are turning to ChatGPT for help with basic tasks, according to a Reuters/Ipsos survey, despite fears that have prompted employers such as Microsoft and Google to curb its use.

Companies around the world are considering how best to leverage ChatGPT, a chatbot that uses generative artificial intelligence to converse with users and respond to countless prompts. However, security firms and companies have expressed concern that it could lead to leaks of intellectual property and strategic information.

Anecdotal examples of people using ChatGPT in their daily work include drafting emails, summarizing documents, and doing preliminary research.

About 28 percent of those who responded to an online survey about artificial intelligence (AI) between July 11 and 17 said they regularly use ChatGPT at work, while only 22 percent said their employers allow such external tools.

The Reuters/Ipsos poll of 2,625 US adults had a credibility interval, a measure of accuracy, of about 2 percentage points.

About 10 percent of those surveyed said their bosses specifically banned external AI tools, while about 25 percent didn’t know if their company allowed them to use the technology.

ChatGPT became the fastest-growing app in history after its launch in November. It has sparked both excitement and alarm, putting its developer OpenAI at odds with regulators, particularly in Europe, where the company’s bulk data collection has drawn criticism from privacy watchdogs.

Human reviewers at other companies can read any of the conversations generated, and researchers have found that similar AI models can reproduce information absorbed during training, posing a potential risk to proprietary information.

“People don’t understand how data is used when they use generative AI services,” said Ben King, director of customer trust at enterprise security company Okta.

“For businesses, this is critical because users don’t have a contract with many AI providers — because they’re free services — so businesses haven’t assessed the risk through their normal evaluation process,” King said.

OpenAI declined to comment when asked about the impact on individual employees using ChatGPT, but highlighted a recent company blog post assuring corporate partners that their data would not be used to further train the chatbot unless they gave express permission.

When people use Google’s Bard, it collects information such as text, location and other usage data. The company allows users to delete past activity from their accounts and request that content fed to the AI be deleted. Alphabet-owned Google declined to comment when asked for more information.

Microsoft did not immediately respond to a request for comment.

“HARMLESS TASKS”

A US employee at Tinder said the dating app’s employees were using ChatGPT for “harmless tasks” such as writing emails, even though the company does not officially allow it.

“It’s regular emails. Very trivial, like making funny calendar invites for team events, farewell emails when someone leaves… We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak to reporters.

The employee said Tinder doesn’t have a ChatGPT rule, but employees still use it “in a general way that doesn’t reveal anything about us being on Tinder.”

Reuters could not independently confirm how Tinder employees used ChatGPT. Tinder said it provided “regular guidance to employees on data security and information best practices.”

In May, Samsung Electronics banned staff worldwide from using ChatGPT and similar AI tools after discovering that an employee had uploaded sensitive code to the platform.

“We are evaluating measures to create a safe environment for using generative AI in ways that improve employee productivity and efficiency,” Samsung said in a statement on Aug. 3.

“However, until these measures are complete, we are temporarily restricting the use of generative AI through company devices.”

Reuters reported in June that Alphabet had warned employees about how to use chatbots, including Google’s Bard, while promoting the program globally.

Google said that while Bard may make unwanted code suggestions, it helps programmers. It also said it strives to be open about the limitations of its technology.

PROHIBITIONS

Some companies told Reuters they are deploying ChatGPT and similar platforms while keeping security in mind.

“We have begun to test and learn how artificial intelligence can improve operational efficiency,” said a Coca-Cola spokeswoman in Atlanta, Georgia, adding that the data remains inside its firewall.

“Internally, we recently launched our enterprise version of Coca-Cola ChatGPT to improve productivity,” the spokesperson said, adding that Coca-Cola plans to use AI to improve the efficiency and productivity of its teams.

Meanwhile, Tate & Lyle CFO Dawn Allen told Reuters that the global ingredient maker began experimenting with ChatGPT after it “found a way to use it in a secure way”.

“We have different groups deciding how they want to use it through experiments. Should we use it in investor relations? Should we use it in data management? How can we use it to perform tasks effectively?”

Some employees say they can’t use the platform at all on their company computers.

“It’s completely banned on the office network, like it doesn’t work,” said a Procter & Gamble employee, who spoke on condition of anonymity because they were not authorized to speak to the press.

P&G declined to comment. Reuters could not independently confirm whether P&G employees were unable to use ChatGPT.

Paul Lewis, chief information security officer at cybersecurity firm Nominet, said companies were right to be cautious.

“Everyone benefits from this increased capability, but the information is not completely secure and can be engineered out,” he said, referring to the “malicious prompts” that can be used to get AI chatbots to reveal information.

“A blanket ban is not warranted yet, but we have to proceed with caution,” Lewis said.
